129 research outputs found

    Design exploration and HW/SW rapid prototyping for real-time system design

    Embedded signal processing systems are usually associated with real-time constraints and/or high data rates, such that fully software implementations are often not satisfactory. In that case, mixed hardware/software implementations must be investigated. However, the increasing complexity of current applications makes classical design processes time consuming and consequently incompatible with efficient design space exploration. To address this problem, we propose a system-level design methodology that aims at unifying the design flow from the functional description to the physical HW/SW implementation through functional and architectural flexibility. Our approach consists in automatically refining high abstraction level models through the use of an electronic system-level (ESL) design tool, according to function models on the one hand and prototyping platform models on the other hand. We illustrate our methodology with the design of a wireless communication system. We provide design results showing the variety of dedicated architectures that can be investigated with this design flow.

    A Computation Core for Communication Refinement of Digital Signal Processing Algorithms

    The most popular formulation of Moore's law, which states that the number of transistors on integrated circuits doubles every 18 months, is said to hold for at least another two decades. According to this prediction, if we want to take advantage of technological evolutions, designers' productivity has to increase in the same proportion. To take up this challenge, system-level design solutions have been set up, but much effort is still needed on system modelling and synthesis. In this paper we propose a computation core synthesis methodology that can be integrated into the communication refinement steps of electronic system-level design tools. In the proposed approach, computation cores used in digital signal processing application specifications relying on coarse-grain communications and synchronizations (e.g. matrix) can be refined into computation cores that handle fine-grain communications and synchronizations (e.g. scalar). Its originality is its ability to synthesize computation cores that handle fine-grain data consumptions and productions respecting the intrinsic partial orders of the algorithms while preserving their original functionality. Such cores can be used to model fine-grain input/output overlapping or iteration pipelining. Our flow is based on the analysis of a fine-grain signal flow graph used to extract fine-grain synchronizations and algorithmic expressions.

    Efficient multicore scheduling of dataflow process networks

    Although multi-core processors are now available everywhere, few applications are able to truly exploit their multiprocessing capabilities. Dataflow programming attempts to solve this problem by expressing explicit parallelism within an application. In this paper, we describe two scheduling strategies for executing a dataflow program on a single-core processor. We also describe an extension of these strategies to multi-core architectures using distributed schedulers and lock-free communications. We show the efficiency of these scheduling strategies on MPEG-4 Simple Profile and MPEG-4 Advanced Video Coding decoders.
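    The abstract does not detail its two single-core strategies; a common baseline in the dataflow literature is round-robin firing, where a scheduler repeatedly visits every actor in a fixed order and fires it only if enough input tokens are present. The sketch below illustrates that idea on a toy network; the names (`Actor`, `round_robin`, the toy actors) are illustrative assumptions, not taken from the paper.

```python
from collections import deque

class Actor:
    """A dataflow actor fires when all of its input FIFOs hold a token."""
    def __init__(self, name, inputs, outputs, fire):
        self.name, self.inputs, self.outputs, self.fire = name, inputs, outputs, fire

    def step(self):
        # Fire only if every input FIFO has at least one token (data-availability test).
        if all(len(q) > 0 for q in self.inputs):
            self.fire(self.inputs, self.outputs)
            return True
        return False

def round_robin(actors, iterations):
    """Visit every actor in a fixed order, firing each one that is ready."""
    for _ in range(iterations):
        for a in actors:
            a.step()

# Toy network: a pre-filled source FIFO -> doubling actor -> sink actor.
src_out = deque([1, 2, 3, 4])
mid_out = deque()
results = []

double = Actor("double", [src_out], [mid_out],
               lambda ins, outs: outs[0].append(ins[0].popleft() * 2))
sink = Actor("sink", [mid_out], [],
             lambda ins, outs: results.append(ins[0].popleft()))

round_robin([double, sink], iterations=4)
print(results)  # [2, 4, 6, 8]
```

    On a multi-core target, each core would run such a loop over its own subset of actors, with the FIFOs replaced by single-producer/single-consumer lock-free queues.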

    Hardware communication refinement in digital signal processing, modelling issues

    In this paper we present the different modelling problems that a Digital Signal Processing (DSP) application designer has to tackle while refining an abstract specification relying on coarse-grain data (e.g. matrices) toward a hardware implementation model relying on fine-grain data (e.g. scalars). To address this problem, we propose a modelling framework that can be used to refine an algorithm specified with coarse-grain interfaces into a form which allows, from the functionality point of view, modelling all of its fine-grain hardware implementations.

    Comparison of Enhancing Methods for Primary/Backup Approach Meant for Fault Tolerant Scheduling

    This report explores algorithms aiming at reducing the run-time and rejection rate of online scheduling of tasks on real-time embedded systems consisting of several processors prone to faults. The authors introduce a new processor scheduling policy, propose new enhancing methods for the primary/backup approach and analyse their performance. The studied techniques are as follows: (i) the method of restricted scheduling windows within which the primary and backup copies can be scheduled, (ii) the method of limiting the number of comparisons, which accounts for the algorithm run-time, when scheduling a task on a system, and (iii) the method of several scheduling attempts. Last but not least, we inject faults to evaluate their impact on the scheduling algorithms. Thorough experiments show that the best proposed method is based on the combination of the limitation on the number of comparisons and two scheduling attempts. When compared to the primary/backup approach without this method, the algorithm run-time is reduced by 23% (mean value) and 67% (maximum value) and the rejection rate is decreased by 4%. This improvement in the algorithm run-time is significant, especially for embedded systems dealing with hard real-time tasks. Finally, we found that the studied algorithm performs well in a harsh environment.
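    The core of the primary/backup approach is placing two copies of each task on two different processors before its deadline, and enhancing method (ii) caps how many placement candidates are examined per task. The exact window and comparison rules are defined in the report; the following is only a simplified sketch under invented assumptions (non-preemptive tasks, processors modelled by their next free time, backup starting after the primary completes).

```python
def schedule_primary_backup(tasks, n_procs, max_comparisons):
    """Simplified primary/backup placement with a cap on the number of
    (primary, backup) processor-pair comparisons per task, in the spirit of
    enhancing method (ii). `tasks` is a list of (arrival, deadline, duration)
    tuples; each processor is modelled only by its next free time."""
    free_at = [0] * n_procs
    accepted, rejected = [], []
    for arrival, deadline, duration in tasks:
        comparisons, placed = 0, None
        # Try processor pairs (p = primary, b = backup) until the cap is hit.
        for p in range(n_procs):
            for b in range(n_procs):
                if p == b:
                    continue  # copies must sit on different processors
                comparisons += 1
                if comparisons > max_comparisons:
                    break
                start_p = max(arrival, free_at[p])
                # Backup may only start once the primary would have finished.
                start_b = max(start_p + duration, free_at[b])
                if start_b + duration <= deadline:
                    placed = (p, b, start_p, start_b)
                    break
            if placed or comparisons > max_comparisons:
                break
        if placed:
            p, b, sp, sb = placed
            free_at[p], free_at[b] = sp + duration, sb + duration
            accepted.append(placed)
        else:
            rejected.append((arrival, deadline, duration))
    return accepted, rejected

acc, rej = schedule_primary_backup(
    [(0, 10, 3), (0, 4, 3), (1, 20, 5)], n_procs=3, max_comparisons=4)
print(len(acc), len(rej))  # 2 1 -- the tight-deadline task is rejected
```

    A real implementation would also deallocate the backup copy once the primary completes successfully, which is what makes the approach attractive in the fault-free case.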

    Synthesis of Multimode digital signal processing systems

    In this paper, we propose a design methodology for implementing a multimode (or multi-configuration), multi-throughput system as a single hardware architecture. The inputs of the design flow are the data flow graphs (DFGs) representing the different modes (i.e. the different applications to be implemented), with their respective throughput constraints. While traditional approaches merge DFGs together before the synthesis process, we propose to use ad hoc scheduling and binding steps during the synthesis of each DFG. The scheduling, which assigns operations to specific time steps, maximizes the similarity between the control steps and thus decreases the controller complexity. The binding process, which assigns operations to specific functional units and data to specific storage elements, maximizes the similarity between datapaths and thus minimizes steering logic and register overhead. First results show the benefits of the proposed synthesis flow.

    Comparison of Different Methods Making Use of Backup Copies for Fault-Tolerant Scheduling on Embedded Multiprocessor Systems

    As transistors scale down, systems become more vulnerable to faults. Their reliability consequently becomes the main concern, especially in safety-critical applications such as the automotive sector, aeronautics or nuclear plants. Many methods have already been introduced to design fault-tolerant systems and thereby improve reliability. Nevertheless, several of them are not suitable for real-time embedded systems since they incur significant overheads, while other methods may be less intrusive but at the cost of being too specific to a dedicated system. The aim of this paper is to analyse a method making use of two task copies when scheduling tasks online on multiprocessor systems. This method can guarantee system reliability without causing excessive overhead or requiring any special hardware components. In addition, it remains general and is thus applicable to a wide range of systems. Last but not least, this paper studies two processor allocation policies: the exhaustive search and the first-found-solution search. It is shown that the exhaustive search is not necessary for efficient fault-tolerant scheduling and that the latter significantly reduces the computation complexity, which is interesting for embedded systems.
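    The difference between the two allocation policies can be made concrete: the first-found policy stops at the first processor that satisfies the deadline, while the exhaustive policy evaluates every processor before choosing. The sketch below counts candidate evaluations for both on a toy instance; the cost model (processors described only by their next free time) is an invented simplification, not the paper's model.

```python
def first_fit(free_at, arrival, duration, deadline):
    """First-found policy: return the first processor meeting the deadline."""
    checks = 0
    for p, free in enumerate(free_at):
        checks += 1
        start = max(arrival, free)
        if start + duration <= deadline:
            return p, checks
    return None, checks

def exhaustive(free_at, arrival, duration, deadline):
    """Exhaustive policy: evaluate every processor, keep the earliest start."""
    best, checks = None, 0
    for p, free in enumerate(free_at):
        checks += 1
        start = max(arrival, free)
        if start + duration <= deadline and (best is None or start < best[1]):
            best = (p, start)
    return (best[0] if best else None), checks

free_at = [8, 2, 5, 0]   # next free time of each processor (illustrative)
p1, c1 = first_fit(free_at, arrival=0, duration=4, deadline=12)
p2, c2 = exhaustive(free_at, arrival=0, duration=4, deadline=12)
print(p1, c1, p2, c2)  # 0 1 3 4
```

    Both placements meet the deadline, but first-fit did so after a single check, which is the complexity saving the paper measures at scale.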

    Fault-Tolerant Online Scheduling Algorithms for CubeSats

    CubeSats are small satellites operating in a harsh space environment. In order to ensure correct functionality on board despite faults, fault-tolerant techniques taking into account spatial, time and energy constraints should be considered. This paper presents a software-level solution taking advantage of the several processors available on board. Two online scheduling algorithms are introduced and evaluated. The results show their performance and the trade-off between rejection rate and energy consumption. Last but not least, the ordering policies achieving a low rejection rate with the algorithm scheduling all tasks as aperiodic are "Earliest Deadline" and "Earliest Arrival Time". As for the algorithm treating arriving tasks as aperiodic or periodic tasks, the "Minimum Slack" ordering policy provides reasonable results.

    Energy-aware Partial-Duplication Task Mapping under Real-Time and Reliability Constraints

    Efficient task execution on multicore platforms can lead to low energy consumption. To achieve this, an Integer Non-Linear Programming (INLP) formulation is proposed that performs task mapping by jointly addressing task allocation, task frequency assignment and task duplication. The goal is to minimize energy consumption under real-time and reliability constraints. To provide an optimal solution, the original INLP problem is safely transformed into an equivalent Mixed Integer Linear Programming (MILP) problem. The comparison of the proposed approach with existing energy-aware task mapping approaches shows that it is able to find solutions where others fail, achieving overall lower energy consumption.
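    The paper's MILP formulation is not reproduced here, but the shape of the optimization can be illustrated with an exhaustive search over a deliberately tiny instance: one task, two frequency levels, optional duplication on a second core. All numbers and the cost model (energy proportional to f² per cycle, a duplicate halving the chance of losing the output to a transient fault) are invented for illustration only.

```python
from itertools import product

# Toy instance (all values illustrative, not from the paper).
W = 100.0                   # workload in cycles
FREQS = [0.5, 1.0]          # normalised frequency levels
DEADLINE = 150.0            # real-time constraint
BASE_FAULT_PROB = 0.02      # per-copy probability of a transient fault
RELIABILITY_TARGET = 0.999  # required probability of at least one good copy

best = None
for f, duplicated in product(FREQS, [False, True]):
    time = W / f                       # duplicate runs in parallel on a
    copies = 2 if duplicated else 1    # second core, so time is unchanged
    energy = copies * W * f ** 2       # convex dynamic-power model
    reliability = 1 - BASE_FAULT_PROB ** copies
    if time <= DEADLINE and reliability >= RELIABILITY_TARGET:
        if best is None or energy < best[0]:
            best = (energy, f, duplicated)

print(best)  # (200.0, 1.0, True): full speed + duplication is the only
             # configuration meeting both the deadline and the target
```

    The example shows why the constraints interact: the lower frequency would save energy but misses the deadline, and the single copy meets the deadline but not the reliability target, so duplication is forced despite doubling the energy. Scaling this search to many tasks is exactly what makes the MILP reformulation necessary.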

    Behavioural virtual components for image compression applications

    To meet the new requirements of digital imaging applications and the growing volume of data they handle, increasingly sophisticated compression techniques are required, giving rise to new standards such as the recent JPEG2000 for still image coding. In this article, we focus on the hardware implementation of an image compression chain based on reusable virtual components. Given the complexity of the algorithms to be implemented and the variety of application profiles supported by JPEG2000, traditional RTL-level design methods reach their limits; we therefore propose to raise the abstraction level of the specification and to take advantage of new commercial architectural synthesis tools in order to introduce the notion of architectural flexibility for a virtual component. We present here the application of our methodology, developed within the RNRT MILPAT project, to the design of a high-level virtual component for the two-dimensional discrete wavelet transform.